
    Critical Song Features for Auditory Pattern Recognition in Crickets

    Many invertebrate and vertebrate species use acoustic communication for pair formation. In the cricket Gryllus bimaculatus, females recognize their species-specific calling song and localize singing males by positive phonotaxis. The males' song pattern has a clear structure consisting of brief, regular pulses that are grouped into repetitive chirps; information is thus present on a short and a long time scale. Here, we ask which structural features of the song critically determine phonotactic performance. To this end we employed artificial neural networks to analyze a large body of behavioral data that measured females' phonotactic behavior under systematic variation of artificially generated song patterns. In a first step we used four non-redundant descriptive temporal features to predict the female response. The model predictions showed a high correlation with the experimental results. We used this behavioral model to explore the integration of the two time scales. Our results suggest that only an attractive pulse structure in combination with an attractive chirp structure reliably elicits phonotaxis. In a further step we investigated all feature sets composed of different combinations of eight proposed temporal features. We identified feature sets of size two, three, and four that achieved the highest predictive power by combining the pulse period from the short time scale with additional information from the long time scale.
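    The mapping from temporal song features to a behavioral response can be sketched as a small feed-forward regression network. The following Python sketch is illustrative only: the feature ranges, network size, and training data are assumptions, not the study's actual setup.

```python
# Minimal sketch: predict a phonotactic score from four temporal song
# features with a small feed-forward network. Feature values and network
# size are illustrative assumptions, not the published model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: rows are song patterns, columns are
# pulse duration, pulse pause, chirp duration, chirp period (ms).
X = rng.uniform([5, 5, 100, 200], [40, 40, 400, 800], size=(200, 4))
# Placeholder behavioral scores in [0, 1]; real data would come from
# phonotaxis experiments.
y = rng.uniform(0, 1, size=200)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# Predict the relative phonotactic score of a new artificial song pattern.
print(model.predict([[20, 20, 250, 500]]))
```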

    Neural representation of calling songs and their behavioral relevance in the grasshopper auditory system

    Acoustic communication plays a key role in mate attraction in grasshoppers. Males use songs to advertise themselves to females. Females evaluate the song pattern, a repetitive structure of sound syllables separated by short pauses, to recognize a conspecific male and as a proxy for his fitness. In their natural habitat, females often receive songs with a degraded temporal structure; perturbations may, for example, result from overlap with other songs. We studied the response behavior of females to songs with different signal degradations. A perturbation of an otherwise attractive song at later positions in the syllable diminished the behavioral response, whereas the same perturbation at the onset of a syllable did not affect song attractiveness. We applied naïve Bayes classifiers to the spike trains of identified neurons in the auditory pathway to explore how sensory evidence about the acoustic stimulus and its attractiveness is represented in the neuronal responses. We found that populations of three or more neurons were sufficient to reliably decode the acoustic stimulus and to predict its behavioral relevance from the single-trial integrated firing rate. A simple model of decision making simulates the female response behavior: it computes, for each syllable, the likelihood that an attractive song pattern is present, as evidenced by the population firing rate. Integration across syllables allows the likelihood to reach a decision threshold and to elicit the behavioral response. The close match between model performance and animal behavior shows that a spike rate code is sufficient to enable song pattern recognition.
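    A decoder of this kind can be sketched with a Gaussian naive Bayes classifier on population spike counts, followed by accumulation of log-likelihood evidence across syllables until a threshold is crossed. All rates, neuron counts, and the threshold below are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch: decode song attractiveness from population spike counts
# with a Gaussian naive Bayes classifier, then accumulate per-syllable
# evidence toward a decision threshold. All values are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n_trials, n_neurons = 300, 3  # three neurons, as in the reported minimum

# Hypothetical single-trial spike counts per syllable for two stimulus
# classes: attractive (1) vs. unattractive (0) song patterns.
labels = rng.integers(0, 2, size=n_trials)
rates = np.where(labels[:, None] == 1, 12.0, 8.0)  # mean count per class
counts = rng.poisson(rates, size=(n_trials, n_neurons))

clf = GaussianNB().fit(counts, labels)

# Decision model: sum the log-likelihood ratio over successive syllables
# and respond once the accumulated evidence exceeds a threshold.
threshold = 5.0  # assumed value
evidence = 0.0
for syllable in range(20):
    trial = rng.poisson(12.0, size=(1, n_neurons))  # an attractive song
    log_p = clf.predict_log_proba(trial)[0]
    evidence += log_p[1] - log_p[0]
    if evidence > threshold:
        print(f"response elicited after syllable {syllable + 1}")
        break
```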

    Zeitliche Aspekte der Verarbeitung von Balzgesängen in Orthoptera (Temporal Aspects of the Processing of Courtship Songs in Orthoptera)

    The central aim of computational neuroscience is to understand the computational principles underlying the neuronal processing of information. In this thesis, experimental data are analyzed with methods from machine learning to contribute to a better understanding of the processing of calling songs in crickets and grasshoppers. Which features of cricket calling songs are critical for the recognition and evaluation of a potential mating partner? Chapter 2 investigates this question by analyzing, with artificial neural networks, a large body of behavioral data recorded from female crickets during phonotactic experiments. Models are presented that quantitatively predict the experimental measure of phonotactic behavior from a given set of calling song feature values. The model predictions make it possible to identify minimal feature sets that best describe the behavioral data. How is information about a calling song, i.e. its identity and attractiveness, encoded in the activity patterns of auditory neurons in grasshoppers? To answer this question, behavioral as well as electrophysiological data from auditory neurons are analyzed with naive Bayes classifiers in chapter 3. It is shown that information about a stimulus is encoded in the spike counts of populations of neurons.
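    Identifying a minimal feature set, as in chapter 2, can be sketched as an exhaustive search over feature combinations, scoring each subset by cross-validated prediction quality. The scoring model and the data below are placeholders, not the thesis's actual pipeline.

```python
# Minimal sketch: exhaustive search for the feature subset that best
# predicts the behavioral score. Data and scoring model are placeholders.
from itertools import combinations

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

FEATURES = ["Pdur", "Ppau", "Pper", "Pdc", "Cdur", "Cpau", "Cper", "Cdc"]

rng = np.random.default_rng(2)
X = rng.normal(size=(100, len(FEATURES)))   # hypothetical feature values
y = rng.uniform(0, 1, size=100)             # hypothetical phonotactic scores

for k in (2, 3, 4):                         # subset sizes examined
    scored = []
    for subset in combinations(range(len(FEATURES)), k):
        score = cross_val_score(Ridge(), X[:, list(subset)], y, cv=5).mean()
        scored.append((score, subset))
    top_score, top_subset = max(scored)
    print(k, [FEATURES[i] for i in top_subset], round(top_score, 3))
```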

    Ten best-performing models.

    Models of size four (light gray), three (dark gray), and two (black edging) are ranked in the top ten. The overall best-performing model uses the pulse period, chirp duration, and chirp duty cycle. The best 2-feature model (pulse period and chirp pause) did not perform significantly differently (two-sided Wilcoxon rank-sum test; significance level 0.01). The best 4-feature model (pulse duration, pulse pause, chirp duration, and chirp period) performed significantly worse than the best 3-feature model (two-sided Wilcoxon rank-sum test). Abbreviations: Pdur, pulse duration; Ppau, pulse pause; Pper, pulse period; Pdc, pulse duty cycle; Cdur, chirp duration; Cpau, chirp pause; Cper, chirp period; Cdc, chirp duty cycle. The models were validated 100 times; error bars indicate standard deviation.
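    Comparing two models' validation scores in this way can be sketched with SciPy's rank-sum test; the score arrays below are synthetic placeholders.

```python
# Minimal sketch: compare the validation scores of two models with a
# two-sided Wilcoxon rank-sum test, as in the figure. Scores are synthetic.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(3)
scores_3feat = rng.normal(0.80, 0.02, size=100)  # 100 validation runs
scores_2feat = rng.normal(0.79, 0.02, size=100)

stat, p = ranksums(scores_3feat, scores_2feat)
alpha = 0.01
print(f"p = {p:.4f}; significant difference: {p < alpha}")
```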

    Artificial song pattern of the cricket Gryllus bimaculatus and its temporal features.

    Typically, a calling song consists of repetitive pulses that are grouped into chirps. The temporal structure of an artificial song pattern is fully determined by four descriptors, namely the duration and pause of both pulses and chirps. Four additional descriptors are frequently used to characterize cricket songs: the period (the sum of duration and pause) and the duty cycle (the ratio of duration to period), each on both the short and the long time scale.
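    The four derived descriptors follow directly from the four base descriptors; a small helper makes the relations explicit. Names follow the figure's abbreviations, and the example values are illustrative.

```python
# Minimal sketch: derive period and duty cycle from duration and pause,
# on both time scales. Values in ms; the example numbers are illustrative.
def derived_features(p_dur, p_pau, c_dur, c_pau):
    p_per = p_dur + p_pau          # pulse period = duration + pause
    c_per = c_dur + c_pau          # chirp period = duration + pause
    return {
        "Pper": p_per,
        "Pdc": p_dur / p_per,      # pulse duty cycle = duration / period
        "Cper": c_per,
        "Cdc": c_dur / c_per,      # chirp duty cycle = duration / period
    }

print(derived_features(p_dur=20, p_pau=20, c_dur=250, c_pau=250))
```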

    Interaction of the short and long time scale.

    (A) Sketch of a logical AND operation (central square) and an OR operation (gray shading). (B) Response field over chirp period and pulse period predicted by the best 4-feature model (pulse duration, pulse pause, chirp duration, chirp period). The dominant circular area of highest response values suggests an AND operation. Circles indicate experimentally measured phonotactic scores.
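    The AND-like integration can be sketched by combining a pulse-scale and a chirp-scale attractiveness function multiplicatively (AND) versus taking their maximum (OR). The Gaussian tuning curves, preferred periods, and widths below are assumptions for illustration, not fitted values.

```python
# Minimal sketch: AND- vs. OR-like combination of the two time scales.
# Tuning curves and preferred periods are illustrative assumptions.
import numpy as np

def tuning(x, preferred, width):
    """Attractiveness of a period value, peaking at the preferred period."""
    return np.exp(-0.5 * ((x - preferred) / width) ** 2)

pulse_period = np.linspace(10, 80, 71)       # ms
chirp_period = np.linspace(100, 900, 81)     # ms
P, C = np.meshgrid(pulse_period, chirp_period)

pulse_attract = tuning(P, preferred=40, width=10)
chirp_attract = tuning(C, preferred=500, width=150)

response_and = pulse_attract * chirp_attract            # high only if BOTH attractive
response_or = np.maximum(pulse_attract, chirp_attract)  # high if EITHER attractive

# An AND-like field shows a single circumscribed peak, as in panel B.
print(response_and.max(), response_or.max())
```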

    Network diagram and predictive performance of the best 4-feature model.

    (A) The network diagram consists of four input neurons representing temporal calling song features, which project to input-evaluating neurons in the hidden layer. These in turn project to the output neuron mimicking the relative phonotactic score. Abbreviations: Pdur, pulse duration; Ppau, pulse pause; Cdur, chirp duration; Cper, chirp period. (B) Correlation between the phonotactic scores of 18 test samples predicted by the best 4-feature model and the experimentally measured scores. Each dot shows the mean phonotactic score for a given song pattern that was presented to 31 females on average and tested 100 times with the model. Error bars indicate standard deviation across individual females (horizontal) and across the 100 repeated model simulations (vertical). The solid regression line has a slope of 0.73.
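    The agreement between predicted and measured scores, as in panel B, can be quantified with a Pearson correlation and the slope of a least-squares regression line; the paired scores below are synthetic placeholders.

```python
# Minimal sketch: quantify prediction-vs-measurement agreement with a
# Pearson correlation and a least-squares regression slope. Synthetic data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
measured = rng.uniform(0, 1, size=18)                   # 18 test samples
predicted = 0.73 * measured + rng.normal(0, 0.05, 18)   # noisy predictions

slope, intercept = np.polyfit(measured, predicted, deg=1)
r, p = pearsonr(measured, predicted)
print(f"slope = {slope:.2f}, r = {r:.2f}, p = {p:.3g}")
```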